7 research outputs found

    Context Based Classification of Reviews Using Association Rule Mining, Fuzzy Logics and Ontology

    The Internet has facilitated the growth of recommendation systems, owing to the ease of sharing customer experiences online. Summarizing and streamlining online textual reviews, however, remains a challenging task. In this paper, we propose a new framework called the Fuzzy-based contextual recommendation system. To classify customer reviews, we extract information from the reviews based on the context given by users. We use text mining techniques to tag each review and extract its context, and then look up the relationship between contexts in an ontological database. When a relationship is not found there, a fuzzy-based semantic analyzer is used to infer the relationship between the review and the context. Sentence-based classification predicts the relevant reviews, whereas the fuzzy-based context method predicts the relevant instances among those reviews. Textual analysis is carried out with a combination of association rule mining and ontology mining, and the relationship between each review and its context is compared using the fuzzy-rule-based semantic analyzer.
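    The abstract does not specify the fuzzy rules used by the semantic analyzer, but the general idea of mapping a crisp review–context similarity score into overlapping fuzzy relevance labels can be sketched as follows. This is a minimal illustration assuming triangular membership functions and a simple term-overlap ratio as input; the label names and breakpoints are hypothetical, not from the paper.

```python
def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def context_relevance(overlap_ratio: float) -> dict:
    """Map a review/context term-overlap ratio in [0, 1] onto three
    overlapping fuzzy labels (illustrative breakpoints, not the paper's)."""
    return {
        "irrelevant": triangular_membership(overlap_ratio, -0.5, 0.0, 0.5),
        "partially_relevant": triangular_membership(overlap_ratio, 0.0, 0.5, 1.0),
        "relevant": triangular_membership(overlap_ratio, 0.5, 1.0, 1.5),
    }

print(context_relevance(0.75))
```

A score of 0.75 belongs equally (degree 0.5) to "partially_relevant" and "relevant", which is the kind of graded judgment a fuzzy classifier can exploit where a hard threshold cannot.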

    A novel end-to-end deep convolutional neural network based skin lesion classification framework

    Background: Skin diseases are reported to contribute 1.79% of the global burden of disease. The accurate diagnosis of specific skin diseases is known to be a challenging task due, in part, to variations in skin tone, texture, body hair, etc. Classification of skin lesions using machine learning is a demanding task, due to the varying shapes, sizes, colors, and vague boundaries of some lesions. The use of deep learning for the classification of skin lesion images has been shown to help diagnose the disease at its early stages. Recent studies have demonstrated that these models perform well in skin detection tasks, with high accuracy and efficiency. Objective: Our paper proposes an end-to-end framework for skin lesion classification, and our contributions are two-fold. Firstly, two fundamentally different algorithms are proposed for segmenting and extracting features from images during image preprocessing. Secondly, we present a deep convolutional neural network model, S-MobileNet, that aims to classify 7 different types of skin lesions. Methods: We used the HAM10000 dataset, which consists of 10000 dermatoscopic images from different populations and is publicly available through the International Skin Imaging Collaboration (ISIC) Archive. The image data was preprocessed to make it suitable for modeling. Exploratory data analysis (EDA) was performed to understand various attributes and their relationships within the dataset. A modified version of a Gaussian filtering algorithm and SFTA was applied for image segmentation and feature extraction. The processed dataset was then fed into the S-MobileNet model. This model was designed to be lightweight and was analysed in three dimensions: using the ReLU activation function, using the Mish activation function, and applying compression at intermediary layers. In addition, an alternative approach for compressing layers in the S-MobileNet architecture was applied to ensure a lightweight model that does not compromise on performance. Results: The model was trained using several experiments and assessed using various performance measures, including loss, accuracy, precision, and the F1-score. Our results demonstrate an improvement in model performance when applying a preprocessing technique. The Mish activation function was shown to outperform ReLU. Further, the classification accuracy of the compressed S-MobileNet was shown to outperform S-MobileNet. Conclusions: To conclude, our findings have shown that our proposed deep learning-based S-MobileNet model is the optimal approach for classifying skin lesion images in the HAM10000 dataset. In the future, our approach could be adapted and applied to other datasets, and validated to develop a skin lesion framework that can be utilised in real-time.
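    The ReLU-versus-Mish comparison in the abstract rests on the standard definitions of the two activation functions. The sketch below is a minimal pure-Python illustration of those definitions only; it does not reproduce the S-MobileNet architecture or any code from the study.

```python
import math

def relu(x: float) -> float:
    """ReLU activation: max(0, x). Hard zero for all negative inputs."""
    return max(0.0, x)

def softplus(x: float) -> float:
    """Softplus: ln(1 + e^x), a smooth approximation of ReLU."""
    return math.log1p(math.exp(x))

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)). Unlike ReLU it is smooth
    everywhere and passes small negative values instead of clipping them."""
    return x * math.tanh(softplus(x))

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):.4f}  mish={mish(x):.4f}")
```

At x = -2 ReLU outputs exactly 0 while Mish outputs a small negative value; this smooth, non-monotonic behaviour is commonly cited as the reason Mish can outperform ReLU in classification tasks.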

    Equilibrium, Thermodynamic, and Kinetic Studies

    Publisher Copyright: © 2022 Razia Sulthana et al. The economic viability of adsorbing crystal violet (CV) using pepper seed spent (PSS) as a biosorbent in an aqueous solution has been studied. A parametrical investigation was conducted considering parameters such as initial dye concentration, contact time, pH value, and temperature variation. The experimental data were analysed by evaluating them against the isotherms of Freundlich, Sips, Tempkin, Jovanovic, Brouers-Sotolongo, Toth, Vieth-Sladek, Radke-Prausnitz, Langmuir, and Redlich-Peterson. The adsorption kinetics were studied by fitting the Dumwald-Wagner, Weber-Morris, pseudo-first-order, pseudo-second-order, film diffusion, and Avrami models. The experimental value of adsorption capacity (Qm = 129.4 mg g-1) was observed to be closest to the Jovanovic isotherm adsorption capacity (Qm = 82.24 mg g-1), with a correlation coefficient (R2) of 0.945. The data validation was found to conform to the pseudo-second-order and Avrami kinetic models. The adsorption process was determined to be spontaneous and endothermic, based on the thermodynamic parameters ΔG0, ΔH0, and ΔS0, with the value of ΔH0 indicating the physical nature of the process. The adsorption of CV onto the PSS was confirmed by infrared spectroscopy and scanning electron microscopy images. The interactions of the CV-PSS system have been discussed, and the observations suggest PSS as a feasible adsorbent to extract CV from an aqueous solution.
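    The model comparison described in the abstract comes down to evaluating closed-form isotherm and kinetic equations against measured data. The sketch below shows the standard forms of two of the isotherms named (Langmuir and Jovanovic) and the integrated pseudo-second-order kinetic model; the parameter values in the usage example are illustrative only, since the paper reports just the Jovanovic Qm of 82.24 mg g-1.

```python
import math

def langmuir(Ce: float, Qm: float, Kl: float) -> float:
    """Langmuir isotherm: Qe = Qm*Kl*Ce / (1 + Kl*Ce).
    Monolayer adsorption on energetically homogeneous sites."""
    return Qm * Kl * Ce / (1.0 + Kl * Ce)

def jovanovic(Ce: float, Qm: float, Kj: float) -> float:
    """Jovanovic isotherm: Qe = Qm * (1 - exp(-Kj*Ce)).
    Saturates toward Qm as the equilibrium concentration Ce grows."""
    return Qm * (1.0 - math.exp(-Kj * Ce))

def pseudo_second_order(t: float, qe: float, k2: float) -> float:
    """Integrated pseudo-second-order kinetics:
    qt = k2*qe^2*t / (1 + k2*qe*t); qt approaches qe at large t."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

# Illustrative use with the reported Jovanovic Qm and an assumed Kj:
print(jovanovic(50.0, 82.24, 0.05))
```

Fitting would proceed by minimizing the residual between each model's prediction and the experimental Qe values, then comparing the resulting R2 scores, which is how the Jovanovic model was identified as the best fit here.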

    A Review of Trustworthy and Explainable Artificial Intelligence (XAI)

    The advancement of Artificial Intelligence (AI) technology has accelerated the development of several systems elicited from it. This boom has left such systems vulnerable to security attacks and prone to considerable bias in how they handle errors, putting humans at risk and leaving machines, robots, and data defenseless. Trustworthy AI (TAI) aims to guarantee respect for human values and the environment. In this paper, we present a comprehensive review of the state-of-the-art on how to build a Trustworthy and eXplainable AI, taking into account that AI is a black box with little insight into its underlying structure. The paper also discusses various TAI components and the corresponding biases and inclinations that make a system unreliable. The study further discusses the necessity for TAI in many verticals, including banking, healthcare, autonomous systems, and IoT. We unite the fragmented approaches to building trust (spanning data protection, pricing, expense, reliability, assurance, and decision-making processes) that utilize TAI across diverse industries and to differing degrees. The paper also emphasizes the importance of transparent and post hoc explanation models in the construction of an eXplainable AI and lists the potential drawbacks and pitfalls of building one. Finally, the policies for developing TAI in the autonomous vehicle construction sector are thoroughly examined, and eclectic ways of building reliable, interpretable, eXplainable, and Trustworthy AI systems are explained to guarantee safe autonomous vehicle systems.
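    The "post hoc explanation models" mentioned in the abstract are typically model-agnostic: they probe an opaque predictor from the outside rather than inspecting its internals. A minimal sketch of one such technique, perturbation-based feature importance, is shown below; the toy linear "black box" and baseline value are assumptions for illustration, not anything from the review.

```python
def perturbation_importance(predict, x, baseline=0.0):
    """Post hoc, model-agnostic explanation: score each input feature by
    how much the prediction changes when that feature alone is replaced
    by a neutral baseline value. Larger scores = more influential."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(abs(base_pred - predict(perturbed)))
    return scores

# Toy black-box model: a fixed linear scorer standing in for any
# opaque classifier whose internals we cannot inspect.
weights = [0.8, 0.1, -0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

print(perturbation_importance(predict, [1.0, 1.0, 1.0]))
```

For this toy model the recovered scores track the magnitudes of the hidden weights, which is exactly the kind of transparency a post hoc explainer is meant to provide without access to the model itself.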